Gain Variation in Recurrent Error Propagation Networks

Author

  • Steven J. Nowlan
Abstract

Abstract. A global gain term is introduced into recurrent analog networks. This gain term may be varied as a recurrent network settles, similar to the way temperature is varied when "annealing" a network of stochastic binary units. An error propagation algorithm is presented which simultaneously optimizes the weights and the gain schedule for a recurrent network. The performance of this algorithm is compared to the standard back propagation algorithm on a difficult constraint satisfaction problem. An order of magnitude improvement in the number of learning trials required is observed with the new algorithm. This improvement is obtained by allowing a much larger region of weight space to satisfy the problem. The simultaneous optimization of weights and gain schedule leads to a qualitatively different region of weight space than that reached by optimization of the weights alone.

1. Introduction

Neural networks have received much attention recently as plausible models for studying the computational properties of massively parallel systems. They have been applied to a variety of problems in signal processing, pattern recognition, speech processing, and more traditional AI problems. These models developed from earlier work in associative memories and models of cooperative computation in early vision processing [3,5,19]. Learning algorithms have been developed [1,18] that enable these networks to learn internal representations, allowing them to represent complex non-linear mappings. Two distinct types of networks have been studied quite extensively. The first of these uses analog units with a sigmoidal I/O function [8], and an error propagation algorithm for updating the weights to minimize an error function [16,17]. Most of these studies have focused on strictly feed-forward networks.
The second type of network employs stochastic binary … *The author is curr…
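The idea in the abstract — analog sigmoid units sharing one global gain that is varied over the settling iterations — can be sketched in a few lines. This is a minimal illustration under assumed details, not the paper's exact formulation: the weight matrix, biases, and linear gain schedule here are hypothetical, and the gain-schedule optimization itself is omitted.

```python
import numpy as np

def settle(W, bias, gain_schedule, n_units, seed=0):
    """Iterate a recurrent analog network toward equilibrium.

    W: (n_units, n_units) weight matrix (toy example, not from the paper)
    gain_schedule: one global gain value g per settling step; raising g
    over time plays the role temperature plays when annealing a network
    of stochastic binary units.
    """
    rng = np.random.default_rng(seed)
    y = rng.uniform(0.4, 0.6, size=n_units)    # start near the midpoint 0.5
    for g in gain_schedule:
        net = W @ y + bias                     # net input to each unit
        y = 1.0 / (1.0 + np.exp(-g * net))     # sigmoid with shared gain g
    return y

# Usage: low initial gain keeps units near-linear; gain rises as the
# network settles, sharpening the units toward binary decisions.
W = np.array([[0.0, -1.0],
              [-1.0, 0.0]])                    # mutual inhibition (toy)
bias = np.array([0.5, 0.5])
schedule = np.linspace(0.5, 5.0, 20)           # assumed annealed schedule
y_final = settle(W, bias, schedule, n_units=2)
```

In the paper the schedule is not fixed in advance: error propagation adjusts both the weights and the per-step gain values; the fixed `np.linspace` schedule above only stands in for that learned schedule.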


Similar resources

Efficient Short-Term Electricity Load Forecasting Using Recurrent Neural Networks

Short term load forecasting (STLF) plays an important role in the economic and reliable operation of power systems. Electric load demand has a complex profile with many multivariable and nonlinear dependencies. In this study, a recurrent neural network (RNN) architecture is presented for STLF. The proposed model is capable of forecasting the next 24-hour load profile. The main feature of this network is ...


Equivalence of Equilibrium Propagation and Recurrent Backpropagation

Recurrent Backpropagation and Equilibrium Propagation are algorithms for fixed-point recurrent neural networks which differ in their second phase. In the first phase, both algorithms converge to a fixed point which corresponds to the configuration where the prediction is made. In the second phase, Recurrent Backpropagation computes error derivatives, whereas Equilibrium Propagation relaxes to an...


An Improved Conjugate Gradient Based Learning Algorithm for Back Propagation Neural Networks

The conjugate gradient optimization algorithm is combined with the modified back propagation algorithm to yield a computationally efficient algorithm for training multilayer perceptron (MLP) networks (CGFR/AG). The computational efficiency is enhanced by adaptively modifying the initial search direction, as described in the following steps: (1) modification of the standard back propagation algorithm by ...


Green's Function Method for Fast On-Line Learning Algorithm of Recurrent Neural Networks

The two well-known learning algorithms for recurrent neural networks are back-propagation (Rumelhart et al., Werbos) and forward propagation (Williams and Zipser). The main drawback of back-propagation is its off-line backward path in time for error accumulation. This violates the on-line requirement in many practical applications. Although the forward propagation algorithm can be used i...


Evolutionary Training for Dynamical Recurrent Neural Networks: An Application in Financial Time Series Prediction

Theoretical and experimental studies have shown that traditional training algorithms for Dynamical Recurrent Neural Networks may suffer from local optima, due to error propagation across the recurrence. In recent years, many researchers have put forward different approaches to solve this problem, most of them based on heuristic procedures. In this paper, the training capabil...



Journal:
  • Complex Systems

Volume 2, Issue -

Pages -

Publication date: 1988